
# Parameter-efficient fine-tuning

## Deepseek R1 Distill Qwen 32B Lora R32

A LoRA adapter extracted from DeepSeek-R1-Distill-Qwen-32B, built on the Qwen2.5-32B base model and suited to parameter-efficient fine-tuning.

Tags: Large Language Model · Transformers | Author: Naozumi0512 | Downloads: 109 | Likes: 2
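A LoRA adapter like the one above stores only a pair of low-rank matrices per adapted layer instead of a full copy of the weights. A minimal NumPy sketch of the idea, for a single linear layer — the shapes, names (`W`, `A`, `B`), and scaling are illustrative assumptions, not the adapter's actual code:

```python
import numpy as np

d_out, d_in, r = 512, 512, 32  # r=32 mirrors the "R32" in the adapter's name

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen base weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # trainable; zero-init so the update starts at 0

def forward(x, scale=1.0):
    # Base path plus low-rank correction: y = W x + scale * B (A x)
    return W @ x + scale * (B @ (A @ x))

# Parameter efficiency: only A and B are trained.
full_params = W.size                # d_out * d_in = 262144
lora_params = A.size + B.size       # r * (d_in + d_out) = 32768
print(lora_params / full_params)    # → 0.125, i.e. 12.5% of the full layer
```

Because `B` is zero-initialized, the adapted layer is exactly the base layer before any training, and the trainable parameter count scales with `r` rather than with the layer size — which is why a rank-32 adapter for a 32B-parameter model stays small enough to distribute separately.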
## MOMENT 1 Small

MOMENT is a family of foundation models for general-purpose time series analysis, supporting a range of time series tasks out of the box, with further gains available through fine-tuning.

License: MIT | Tags: Materials Science · Transformers | Author: AutonLab | Downloads: 38.03k | Likes: 4
## Lingowhale 8B

A Chinese-English bilingual large language model jointly open-sourced by DeepLang Tech and the Tsinghua NLP Lab, pre-trained on trillions of high-quality tokens and supporting an 8K-token context window.

Tags: Large Language Model · Transformers · Multilingual | Author: deeplang-ai | Downloads: 98 | Likes: 21
© 2025 AIbase